Working with mesh data

Right now, we have an application ready and set up, but nothing on screen. Or more exactly, we only have a constant color for the whole window. It is time to change that !

In this tutorial, we will present how to bring a 3D mesh to the screen. We will cover both creating one through the API and loading one from an external file.
All code given here should be done before entering the run loop.

Let's create a triangle !

Working with the API

Let's start by creating that mesh from the code. This will enable us to have maximum control over what it will look like.

Let's start by including everything necessary :

#include <NilkinsGraphics/Meshes/Mesh.h>
#include <NilkinsGraphics/Meshes/MeshManager.h>

// Needed to manipulate vertices from code
#include <NilkinsGraphics/Meshes/VertexComponent.h>

Now, we can first create the mesh :

nkGraphics::Mesh* mesh = nkGraphics::MeshManager::getInstance()->createOrRetrieve("Triangle") ;

This is a pattern you will often find in the component, and in the engine in general :

  1. Create a resource through the dedicated manager
  2. Use it
  3. Erase it, again, through the dedicated manager

In this case, the MeshManager is responsible for all memory allocations concerning meshes.
Creating a resource goes through the createOrRetrieve function, which creates the resource if it does not exist yet, or retrieves the existing one otherwise.

This pattern has some benefits : ownership and memory management are centralized in the manager, a resource can be retrieved by name from anywhere in the code, and cleanup happens in one well identified place.
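To make the pattern concrete, here is a minimal sketch of a createOrRetrieve-style manager. The classes below are illustrative stand-ins, not the engine's actual implementation :

```cpp
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical resource type, standing in for nkGraphics::Mesh
struct Mesh
{
    std::string _name ;
} ;

// Minimal sketch of the manager pattern : the manager owns every Mesh,
// and createOrRetrieve either creates a new one or hands back the
// instance already registered under that name
class MeshManager
{
public:
    Mesh* createOrRetrieve(const std::string& name)
    {
        auto it = _meshes.find(name) ;

        if (it != _meshes.end())
            return it->second.get() ;

        auto mesh = std::make_unique<Mesh>() ;
        mesh->_name = name ;

        Mesh* raw = mesh.get() ;
        _meshes.emplace(name, std::move(mesh)) ;
        return raw ;
    }

    void erase(const std::string& name)
    {
        _meshes.erase(name) ;
    }

private:
    std::unordered_map<std::string, std::unique_ptr<Mesh>> _meshes ;
} ;
```

Calling createOrRetrieve twice with the same name returns the same pointer, which is what makes the manager the single source of truth for a resource.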

Now that the mesh is created, we need to set its data. First, we request it to create dedicated arrays for that :

nkGraphics::VertexComponent* pointArray = mesh->getNewVertexArray(3) ;
unsigned int* indexArray = mesh->getNewIndexArray(3) ;

First, we request a vertex array. In this array, vertex data, like positions or normals, will be set.
What we requested here is the "unpacked" version : we retrieve data expressed through a structure, for easy filling. The drawback is that the component will need to pack the data before feeding it to the GPU.
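The packing step mentioned above can be sketched like this. The vertex layout is hypothetical, it only illustrates interleaving unpacked structures into one contiguous buffer the way a GPU upload expects :

```cpp
#include <vector>

// Hypothetical unpacked vertex, mirroring what a VertexComponent-style
// structure might hold : easy to fill field by field
struct UnpackedVertex
{
    float _position[3] ;
    float _normal[3] ;
} ;

// Packing step : interleave the fields of every vertex into one
// contiguous float buffer, ready to feed the GPU
std::vector<float> pack(const std::vector<UnpackedVertex>& vertices)
{
    std::vector<float> buffer ;
    buffer.reserve(vertices.size() * 6) ;

    for (const UnpackedVertex& v : vertices)
    {
        buffer.insert(buffer.end(), v._position, v._position + 3) ;
        buffer.insert(buffer.end(), v._normal, v._normal + 3) ;
    }

    return buffer ;
}
```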

We also request the index buffer, describing how to link the vertices to form the triangles.
The index buffer is always packed already, and its interpretation depends on the topology given for the mesh, within the composition we will see later.

Once we have that, what is left is feeding the data :

// Vertex data
pointArray[0]._position = nkMaths::Vector(-1.f, -1.f, 10.f) ;
pointArray[1]._position = nkMaths::Vector(0.f, 1.f, 10.f) ;
pointArray[2]._position = nkMaths::Vector(1.f, -1.f, 10.f) ;

// Index data
indexArray[0] = 0 ;
indexArray[1] = 2 ;
indexArray[2] = 1 ;

Vertex data only uses the position here. The indices link the points into a triangle, keeping in mind that the front of the surface is defined as counter clockwise, by default. As such, we define one triangle made of the points at indices 0, 2, and 1 of the vertex data, so that the triangle faces the camera when the camera looks down the Z direction.
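The winding of that index order can be checked with a small 2D signed-area test. This is a sketch ignoring the engine's exact handedness conventions : it only tests the order of the projected points on screen (x right, y up) :

```cpp
// Minimal 2D point for the winding test
struct Vec2 { float x, y ; } ;

// z component of the cross product (b - a) x (c - b) :
// positive means the order a -> b -> c is counter clockwise on screen
float windingZ(Vec2 a, Vec2 b, Vec2 c)
{
    return (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x) ;
}
```

With the triangle's points projected to (-1, -1), (0, 1), and (1, -1), the order 0, 2, 1 gives a positive value, hence a counter clockwise, front facing triangle, while the order 0, 1, 2 gives a negative one.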

The vertex data is now correctly defined. What is left is to set the composition of the mesh. This gives information about the way the data we defined needs to be interpreted. As such :

nkGraphics::VertexComposition compo ;
compo._position = true ;
compo._uv = false ;
compo._normal = false ;
compo._topology = nkGraphics::PRIMITIVE_TOPOLOGY_TRIANGLELIST ;

mesh->setVertexComposition(compo) ;

By filling the VertexComposition structure, we give all required information about what our data is like : here, vertices only carry a position, with no UVs nor normals, and the index buffer describes a triangle list.

We can then communicate that to the mesh. Once the mesh knows all of that, we can request a loading of its resource.

Here is another pattern you will often encounter in the engine. Resources always follow the same lifecycle :

  1. Create the resource. It can be used right away, with a default behaviour
  2. Change all parameters required
  3. Request a loading of the resource. If it succeeds, the behaviour will now use the parameters provided
  4. Use, change parameters and reload if required
  5. Erase, unload

Such a pattern allows a resource to be usable right away with a default behaviour, and to be reconfigured and reloaded at any time without being recreated.
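This lifecycle can be sketched with a hypothetical resource class. The names and the single parameter are illustrative, not the engine's API :

```cpp
// Sketch of the resource lifecycle : usable right away with a default
// behaviour, parameters only taken into account once load() succeeds
class Resource
{
public:
    // Step 2 : change parameters ; nothing visible happens yet
    void setParameter(int value) { _pendingValue = value ; }

    // Step 3 : validate and apply the pending parameters
    bool load()
    {
        if (_pendingValue < 0)
            return false ; // Loading failed : keep current behaviour

        _activeValue = _pendingValue ;
        return true ;
    }

    // Step 5 : drop the loaded state
    void unload() { _activeValue = 0 ; }

    // What the "behaviour" currently uses
    int activeValue() const { return _activeValue ; }

private:
    int _pendingValue = 0 ;
    int _activeValue = 0 ; // Default behaviour before any load
} ;
```

Note how setParameter alone changes nothing observable : the active value only switches once load() validates the pending state, which is the point of step 3 above.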

With that in mind, a last call has to be made :

mesh->load() ;

Now our mesh is ready to be used ! If the loading step fails, this method will return false and it will log what went wrong.

Using the render queue

Now that the mesh is ready, we need to tell the component that we want it painted during the rendering step. For that, we need to tinker with render queues.
A render queue is exactly what its name implies : it queues objects that need to be rendered. Render queues are used within the passes doing the image composition. We will cover this in a later tutorial, so for now let's focus on the render queue itself. First, include :

#include <NilkinsGraphics/RenderQueues/RenderQueue.h>
#include <NilkinsGraphics/RenderQueues/RenderQueueManager.h>

With those includes, we can start messing with the render queues :

nkGraphics::RenderQueue* rq = nkGraphics::RenderQueueManager::getInstance()->get(0) ;

Here we get queue number 0. This is the queue used by the default image composition when painting the scene. As a result, altering it changes what is rendered right away.

nkGraphics::Entity* ent = rq->addEntity() ;
nkGraphics::SubEntity* subEnt = ent->addChild() ;

subEnt->setMesh(mesh) ;

What happens here is that we add an "entity" to the render queue. An entity represents an object enqueued for rendering, on which we can set all the information we need.
An entity is made of sub entities. Each sub entity references a mesh that can be rendered. As a result, one entity can be composed of many meshes.
To set our mesh, we add an entity to the queue, declare a sub entity on it, and set the mesh on that sub entity.
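The queue / entity / sub entity relationship can be sketched with hypothetical stand-ins (not the engine's actual classes) :

```cpp
#include <memory>
#include <string>
#include <vector>

// Stand-in for nkGraphics::Mesh
struct Mesh { std::string _name ; } ;

// A sub entity references one mesh that can be rendered
struct SubEntity
{
    void setMesh(Mesh* mesh) { _mesh = mesh ; }
    Mesh* _mesh = nullptr ;
} ;

// An entity is made of sub entities, so it can aggregate many meshes
struct Entity
{
    SubEntity* addChild()
    {
        _children.push_back(std::make_unique<SubEntity>()) ;
        return _children.back().get() ;
    }

    std::vector<std::unique_ptr<SubEntity>> _children ;
} ;

// The render queue owns the entities enqueued for rendering
struct RenderQueue
{
    Entity* addEntity()
    {
        _entities.push_back(std::make_unique<Entity>()) ;
        return _entities.back().get() ;
    }

    std::vector<std::unique_ptr<Entity>> _entities ;
} ;
```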

With all of that, we are ready to give our program a new run. Let's see what it looks like :

Beautiful triangle
Our triangle is visible on screen !

To recap, for a mesh, we need to :

  1. Create the mesh, prepare its data and load it
  2. Request a render queue we want to see it in. A mesh can be set on as many queues as needed
  3. Prepare the entity on the queue, and its sub entity with the mesh

By respecting all these steps, a mesh can easily be part of the rendering pipeline.

Let's get a more complicated mesh

Using a file as source

Creating our own mesh from scratch can be useful in some cases, but in others we might want to load an already prepared mesh from a file. Let's load a sphere .obj file, which you will find within the release's Data folder.
Also, keep track of the folder given to the ResourceManager. As a reminder, the working path set within it will be considered the root folder to use.

First, let's go back to a freshly requested mesh by deleting the mesh setup code. Then, include :

#include <NilkinsGraphics/Meshes/Load/MeshLoader.h>

The MeshLoader allows filling a mesh's buffers with data from files. For supported formats, see the API documentation. Now to ask it to prepare the mesh :

nkGraphics::MeshLoader::fillBufferFromFile(mesh, "sphere.obj") ;

mesh->load() ;

The loader will read the file and fill the mesh data, but leave the mesh unloaded. This is done so that further tweaking can be done if required. As a result, after filling the buffers, we need to load the mesh ourselves. Let's see what we now have on screen :

Mesh disappeared
A sphere, yes yes

The current image is not very encouraging, but fear not ! This is normal. First, ensure that the MeshLoader is not complaining that the file cannot be loaded. If it is, ensure the path you give is correct and relative to the working path set within the ResourceManager.
Then, if nothing is logged, the mesh has been successfully loaded and is being displayed. But why aren't we seeing anything ?

Positioning meshes in the world

You may have guessed it : our mesh is simply centered right where our camera is, at (0, 0, 0). As a result, we are seeing the sphere from the inside. Since back faces are culled away by default, the component doesn't render anything.

The way to correct that is to move it, through the use of a node graph. Each Entity can be linked to a Node, that will give its position, orientation, and scale within a scene graph. By default, an Entity won't be tied to any node, which means its world coordinates will correspond to its model coordinates.
Let's change that, and include :

#include <NilkinsGraphics/Graph/Node.h>
#include <NilkinsGraphics/Graph/NodeManager.h>

With all of this, we will be able to manipulate nodes, like this :

nkGraphics::Node* node = nkGraphics::NodeManager::getInstance()->create() ;
node->setPositionAbsolute(nkMaths::Vector(0.f, 0.f, 10.f)) ;

ent->setParentNode(node) ;

First, we request a creation from the manager. We don't give the node a name, as we don't need one : it will be named after an internal counter.
Then, we set the node's absolute position, translating it within the world, 10 units in front of the camera.
The node is then assigned to the entity, so that it transforms the entity's positioning. Let's see the effect of those lines :

The sphere
See, it was there this whole time !

Now our sphere is set further away from the camera, and we can witness it in its glorious shape, from outside. Using the node graph is optional, but when you need to move objects around, it is the way to go.
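As a sketch of what attaching the node changes (illustrative names, not the engine's API), world coordinates can be thought of as the model coordinates translated by the node's absolute position, and as equal to the model coordinates when no node is attached :

```cpp
// Minimal 3D point for the sketch
struct Vec3 { float x, y, z ; } ;

// Without a node, world coordinates equal model coordinates ;
// with a node, every point is translated by the node's absolute position
Vec3 toWorld(const Vec3& modelPos, const Vec3* nodePosition)
{
    if (!nodePosition)
        return modelPos ;

    return { modelPos.x + nodePosition->x,
             modelPos.y + nodePosition->y,
             modelPos.z + nodePosition->z } ;
}
```

With the node at (0, 0, 10), the sphere's center moves from the camera's position to 10 units in front of it, which is why it becomes visible.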

And this covers the basic interactions with meshes !